ContactRL: Safe Reinforcement Learning based Motion Planning for Contact based Human Robot Collaboration

Mulkana, Sundas Rafat, Yu, Ronyu, Guha, Tanaya, Li, Emma

arXiv.org Artificial Intelligence

Abstract-- In collaborative human-robot tasks, safety requires not only avoiding collisions but also ensuring safe, intentional physical contact. We present ContactRL, a reinforcement learning (RL) based framework that directly incorporates contact safety into the reward function through force feedback. This enables a robot to learn adaptive motion profiles that minimize human-robot contact forces while maintaining task efficiency. In simulation, ContactRL achieves a low safety violation rate of 0.2% with a high task success rate of 87.7%, outperforming state-of-the-art constrained RL baselines. To guarantee deployment safety, we augment the learned policy with a kinetic energy based Control Barrier Function (eCBF) shield. Real-world experiments on a UR3e robotic platform performing small object handovers from a human hand across 360 trials confirm safe contact, with measured normal forces consistently below 10N. These results demonstrate that ContactRL enables safe and efficient physical collaboration, thereby advancing the deployment of collaborative robots in contact-rich tasks.
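The two ingredients the abstract names, a force-aware reward and a kinetic-energy shield, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weights, the violation penalty, and the energy cap are all assumed values, and only the 10 N force threshold comes from the abstract.

```python
# Hypothetical ContactRL-style reward: reward task progress, penalize measured
# contact force, and add a large penalty when the safety threshold is crossed.
# Weights w_task, w_force and the -10.0 violation penalty are assumptions.
def contact_safe_reward(task_progress, contact_force_n,
                        force_limit=10.0, w_task=1.0, w_force=0.5):
    force_penalty = w_force * max(0.0, contact_force_n)
    violation = -10.0 if contact_force_n > force_limit else 0.0
    return w_task * task_progress - force_penalty + violation

# Kinetic-energy based shield check in the spirit of the eCBF: veto commands
# that would push the end-effector's kinetic energy past a cap (cap assumed).
def energy_shield_ok(mass_kg, velocity_ms, energy_cap_j=0.5):
    return 0.5 * mass_kg * velocity_ms ** 2 <= energy_cap_j
```

At deployment, the shield runs outside the learned policy: an action is executed only if `energy_shield_ok` holds for the commanded motion, so safety does not rest on the reward shaping alone.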


A Novel Approach to Tomato Harvesting Using a Hybrid Gripper with Semantic Segmentation and Keypoint Detection

Ansari, Shahid, Gohil, Mahendra Kumar, Maeda, Yusuke, Bhattacharya, Bishakh

arXiv.org Artificial Intelligence

Precision agriculture and smart farming are increasingly adopted to improve productivity, reduce input waste, and maintain high product quality under growing demand. These approaches integrate sensing, automation, and data-driven decision-making to improve crop yield and post-harvest quality (Gupta, Abdelsalam, Khorsandroo, and Mittal (2020)). In this context, autonomous robotic harvesting is a key enabling technology for horticulture, where labor shortages and high labor costs directly affect production and consistency. Despite progress in mechanization, many conventional harvesting methods (e.g., combine harvesters, reapers, and trunk shakers) are unsuitable for soft and delicate crops such as tomatoes and strawberries because large contact forces and impacts can bruise or damage the fruit (Cho, Iida, Suguri, Masuda, and Kurita (2014); Shojaei (2021)). Selective harvesting, where fruits are picked individually at the appropriate ripeness stage, is therefore preferred for high-value crops. However, selective harvesting remains challenging because a robot must (i) detect the target fruit under occlusion, (ii) estimate its pose and identify the pedicel cutting location, and (iii) execute grasping and detachment without damaging the fruit or plant. In real cultivation environments, tomatoes are often densely packed and partially occluded by leaves and branches, making perception and reliable manipulation difficult (Chen et al. (2015)). Consequently, integrated harvesting systems that combine compliant end-effectors, robust perception, and closed-loop control remain an active research topic (Comba, Gay, Piccarolo, and Ricauda Aimonino (2010); Ling, Zhao, Gong, Liu, and Wang (2019)). A wide range of end-effectors has been explored for harvesting and handling soft produce.


Gentle Object Retraction in Dense Clutter Using Multimodal Force Sensing and Imitation Learning

Brouwer, Dane, Citron, Joshua, Nolte, Heather, Bohg, Jeannette, Cutkosky, Mark

arXiv.org Artificial Intelligence

Dense collections of movable objects are common in everyday spaces, from cabinets in a home to shelves in a warehouse. Safely retracting objects from such collections is difficult for robots, yet people do it frequently, leveraging learned experience in tandem with vision and non-prehensile tactile sensing on the sides and backs of their hands and arms. We investigate the role of contact force sensing for training robots to gently reach into constrained clutter and extract objects. The available sensing modalities are (1) "eye-in-hand" vision, (2) proprioception, (3) non-prehensile triaxial tactile sensing, (4) contact wrenches estimated from joint torques, and (5) a measure of object acquisition obtained by monitoring the vacuum line of a suction cup. We use imitation learning to train policies from a set of demonstrations on randomly generated scenes, then conduct an ablation study of wrench and tactile information. We evaluate each policy's performance across 40 unseen environment configurations. Policies employing any force sensing show fewer excessive force failures, an increased overall success rate, and faster completion times. The best performance is achieved using both tactile and wrench information, producing an 80% improvement over the baseline without force information.


Discovering Self-Protective Falling Policy for Humanoid Robot via Deep Reinforcement Learning

Shi, Diyuan, Lyu, Shangke, Wang, Donglin

arXiv.org Artificial Intelligence

Humanoid robots have received significant research interest and seen rapid advancement in recent years. Despite many successes, their morphology, dynamics, and control-policy limitations make humanoid robots more prone to falling than other embodiments such as quadruped or wheeled robots. Their large weight, high center of mass, and many degrees of freedom can cause serious hardware damage, to both the robot and surrounding objects, when they fall uncontrolled. Existing research in this field mostly relies on control-based methods that struggle to cover diverse falling scenarios and may introduce unsuitable human priors. Alternatively, large-scale deep reinforcement learning and curriculum learning can be employed to incentivize a humanoid agent to discover a falling-protection policy suited to its own structure and properties. In this work, with carefully designed reward functions and a domain-diversification curriculum, we successfully train a humanoid agent to explore falling-protection behaviors and find that by forming a 'triangle' structure with its rigid body, falling damage can be significantly reduced. With comprehensive metrics and experiments, we quantify its performance against other methods, visualize its falling behaviors, and successfully transfer the policy to a real-world platform.


SafeFall: Learning Protective Control for Humanoid Robots

Meng, Ziyu, Liu, Tengyu, Ma, Le, Wu, Yingying, Song, Ran, Zhang, Wei, Huang, Siyuan

arXiv.org Artificial Intelligence

Bipedal locomotion makes humanoid robots inherently prone to falls, causing catastrophic damage to the expensive sensors, actuators, and structural components of full-scale robots. To address this critical barrier to real-world deployment, we present SafeFall, a framework that learns to predict imminent, unavoidable falls and execute protective maneuvers to minimize hardware damage. SafeFall is designed to operate seamlessly alongside an existing nominal controller, ensuring no interference during normal operation. It combines two synergistic components: a lightweight, GRU-based fall predictor that continuously monitors the robot's state, and a reinforcement learning policy for damage mitigation. The protective policy remains dormant until the predictor identifies a fall as unavoidable, at which point it activates to take control and execute a damage-minimizing response. This policy is trained with a novel, damage-aware reward function that incorporates the robot's specific structural vulnerabilities, learning to shield critical components like the head and hands while absorbing energy with more robust parts of its body. Validated on a full-scale Unitree G1 humanoid, SafeFall demonstrated significant performance improvements over unprotected falls. It reduced peak contact forces by 68.3%, peak joint torques by 78.4%, and eliminated 99.3% of collisions with vulnerable components. By enabling humanoids to fail safely, SafeFall provides a crucial safety net that allows for more aggressive experiments and accelerates the deployment of these robots in complex, real-world environments.
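The "damage-aware reward that incorporates structural vulnerabilities" can be illustrated with a weighted contact penalty. This is a sketch under assumptions: the part names, vulnerability weights, and the use of peak contact force as the damage proxy are all hypothetical, not taken from the paper.

```python
# Hypothetical damage-aware fall reward: contact on vulnerable parts (head,
# hands) is penalized far more heavily than contact on robust links, steering
# the policy toward absorbing impact with sturdy body parts. Weights assumed.
VULNERABILITY = {"head": 10.0, "hand": 10.0, "forearm": 2.0, "torso": 1.0}

def fall_reward(contacts):
    """contacts: list of (body_part, peak_force_newton) pairs for one step.
    More negative means a more damaging fall event."""
    return -sum(VULNERABILITY.get(part, 1.0) * force
                for part, force in contacts)
```

Under this shaping, the same impact force is ten times more costly on the head than on the torso, which is the mechanism that makes "shielding critical components" reward-optimal.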


Head Stabilization for Wheeled Bipedal Robots via Force-Estimation-Based Admittance Control

Wang, Tianyu, Yan, Chunxiang, Liao, Xuanhong, Zhang, Tao, Wang, Ping, Wen, Cong, Liu, Dingchuan, Yu, Haowen, Lyu, Ximin

arXiv.org Artificial Intelligence

Abstract-- Wheeled bipedal robots are emerging as flexible platforms for field exploration. However, head instability induced by uneven terrain can degrade the accuracy of onboard sensors (e.g., cameras) or damage fragile payloads. Existing research primarily focuses on stabilizing the mobile platform but overlooks active stabilization of the head in the world frame, resulting in vertical oscillations that undermine overall stability. To address this challenge, we developed a model-based ground force estimation method for our 6-degree-of-freedom (6-DOF) wheeled bipedal robot. Leveraging these force estimates, we implemented an admittance control algorithm to enhance terrain adaptability.
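The admittance control the abstract mentions follows the standard second-order law M·ẍ + D·ẋ + K·x = f_ext: the measured (here, estimated) external force drives a virtual mass-spring-damper whose displacement the head mechanism then tracks. The sketch below is a generic 1-DOF discretization; the mass, damping, and stiffness values are illustrative assumptions, not the paper's parameters.

```python
# Minimal 1-DOF admittance step for vertical head stabilization.
# f_ext is the estimated ground contact force; m, d, k define the virtual
# mass-spring-damper (values assumed for illustration).
def admittance_step(x, x_dot, f_ext, dt, m=2.0, d=20.0, k=100.0):
    """Integrate m*x_dd + d*x_dot + k*x = f_ext by one explicit Euler step;
    returns the compliant displacement the head actuator should track."""
    x_dd = (f_ext - d * x_dot - k * x) / m
    x_dot_new = x_dot + x_dd * dt
    x_new = x + x_dot_new * dt
    return x_new, x_dot_new
```

Called once per control cycle with the force estimate, this yields a head trajectory that yields to terrain-induced force spikes instead of transmitting them to the payload.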


Robust Adaptive Time-Varying Control Barrier Function with Application to Robotic Surface Treatment

Kim, Yitaek, Sloth, Christoffer

arXiv.org Artificial Intelligence

Set invariance techniques such as control barrier functions (CBFs) can be used to enforce time-varying constraints such as keeping a safe distance from dynamic objects. However, existing methods for enforcing time-varying constraints often overlook model uncertainties. To address this issue, this paper proposes a robust adaptive CBF-based controller design that enforces time-varying constraints while accounting for parametric uncertainty and additive disturbances. To this end, we first leverage Robust adaptive Control Barrier Functions (RaCBFs) to handle model uncertainty, along with the concept of Input-to-State Safety (ISSf) to ensure robustness towards input disturbances. Furthermore, to alleviate the inherent conservatism in robustness, we also incorporate a set membership identification scheme. We demonstrate the proposed method on robotic surface treatment, which requires time-varying force bounds to ensure uniform quality, in numerical simulation and on a real robotic setup, showing that the quality is formally guaranteed within an acceptable range.
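The core CBF mechanism behind designs like this is a safety filter: the nominal control is minimally modified so the barrier condition ḣ(x,u) ≥ −α·h(x) holds. For a single input with control-affine barrier dynamics ḣ = a + b·u, the usual QP has a closed form, sketched below. This is the plain (non-robust, non-adaptive) baseline filter; the RaCBF and ISSf extensions in the paper add uncertainty margins on top of it, and the terms a, b here are assumed known.

```python
# Scalar CBF safety filter (closed-form solution of the single-input QP).
# Enforces h_dot = a + b*u >= -alpha * h by clipping the nominal input u_nom
# as little as possible; a, b are the Lie-derivative terms of h (assumed).
def cbf_filter(u_nom, h, a, b, alpha=1.0):
    if b == 0.0:
        return u_nom  # constraint does not depend on u at this state
    u_bound = (-alpha * h - a) / b
    # b > 0: constraint is u >= u_bound; b < 0: constraint is u <= u_bound.
    return max(u_nom, u_bound) if b > 0 else min(u_nom, u_bound)
```

When the state is well inside the safe set (h large), the bound is slack and the nominal control passes through unchanged; the filter only intervenes near the boundary.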


Gentle Manipulation Policy Learning via Demonstrations from VLM Planned Atomic Skills

Zhou, Jiayu, Wu, Qiwei, Li, Jian, Chen, Zhe, Xiong, Xiaogang, Xu, Renjing

arXiv.org Artificial Intelligence

Autonomous execution of long-horizon, contact-rich manipulation tasks traditionally requires extensive real-world data and expert engineering, posing significant cost and scalability challenges. This paper proposes a novel framework integrating hierarchical semantic decomposition, reinforcement learning (RL), visual language models (VLMs), and knowledge distillation to overcome these limitations. Complex tasks are decomposed into atomic skills, with a policy for each primitive trained via RL exclusively in simulation. Crucially, our RL formulation incorporates explicit force constraints to prevent object damage during delicate interactions. VLMs perform high-level task decomposition and skill planning, generating diverse expert demonstrations. These are distilled into a unified policy via a Visual-Tactile Diffusion Policy for end-to-end execution. We conduct comprehensive ablation studies exploring different VLM-based task planners to identify optimal demonstration generation pipelines, and systematically compare imitation learning algorithms for skill distillation. Extensive simulation experiments and physical deployment validate that our approach achieves policy learning for long-horizon manipulation without costly human demonstrations, while the VLM-guided atomic skill framework enables scalable generalization to diverse tasks.


Force-Safe Environment Maps and Real-Time Detection for Soft Robot Manipulators

Dickson, Akua K., Garcia, Juan C. Pacheco, Sabelhaus, Andrew P.

arXiv.org Artificial Intelligence

Soft robot manipulators have the potential for deployment in delicate environments to perform complex manipulation tasks. However, existing obstacle detection and avoidance methods do not consider limits on the forces that manipulators may exert upon contact with delicate obstacles. This work introduces a framework that maps force safety criteria from task space (i.e. positions along the robot's body) to configuration space (i.e. the robot's joint angles) and enables real-time force safety detection. We incorporate limits on allowable environmental contact forces for given task-space obstacles, and map them into configuration space (C-space) through the manipulator's forward kinematics. This formulation ensures that configurations classified as safe are provably below the maximum force thresholds, thereby allowing us to determine force-safe configurations of the soft robot manipulator in real-time. We validate our approach in simulation and hardware experiments on a two-segment pneumatic soft robot manipulator. Results demonstrate that the proposed method accurately detects force safety during interactions with deformable obstacles, thereby laying the foundation for real-time safe planning of soft manipulators in delicate, cluttered environments.
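The task-space-to-C-space mapping the abstract describes can be illustrated with a toy model: forward kinematics locates the robot's body relative to an obstacle, a contact model bounds the force there, and a configuration is labeled safe if that bound stays below the threshold. Everything below is an assumption for illustration: a rigid 2-link planar arm stands in for the soft manipulator, and contact force against a wall at x = wall_x is taken as linear in penetration depth.

```python
import math

# Toy forward kinematics for a 2-link planar arm (link lengths assumed).
def tip_position(theta1, theta2, l1=0.3, l2=0.3):
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Map a task-space force limit into C-space: a configuration is force-safe
# if the modeled contact force at the tip stays below f_max. The linear
# stiffness contact model and its parameters are illustrative assumptions.
def config_is_force_safe(theta1, theta2, wall_x=0.5,
                         stiffness=200.0, f_max=2.0):
    x, _ = tip_position(theta1, theta2)
    penetration = max(0.0, x - wall_x)
    return stiffness * penetration <= f_max
```

Precomputing this classification over a grid of joint angles yields the kind of force-safe C-space map the paper uses for real-time checking, with the guarantee inherited from the conservative force bound.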


Phy-Tac: Toward Human-Like Grasping via Physics-Conditioned Tactile Goals

Lyu, Shipeng, Sheng, Lijie, Wang, Fangyuan, Zhang, Wenyao, Lin, Weiwei, Jia, Zhenzhong, Navarro-Alarcon, David, Guo, Guodong

arXiv.org Artificial Intelligence

Abstract--Humans naturally grasp objects with the minimal force required for stability, whereas robots often rely on rigid, over-squeezing control. To narrow this gap, we propose a human-inspired physics-conditioned tactile method (Phy-Tac) for force-optimal stable grasping (FOSG) that unifies pose selection, tactile prediction, and force regulation. A physics-based pose selector first identifies feasible contact regions with optimal force distribution based on surface geometry. Then, a physics-conditioned latent diffusion model (Phy-LDM) predicts the tactile imprint under the FOSG target. Finally, a latent-space LQR controller drives the gripper toward this tactile imprint with minimal actuation, preventing unnecessary compression. Trained on a physics-conditioned tactile dataset covering diverse objects and contact conditions, the proposed Phy-LDM achieves superior tactile prediction accuracy, while Phy-Tac outperforms fixed-force and GraspNet-based baselines in grasp stability and force efficiency. Experiments on classical robotic platforms demonstrate force-efficient and adaptive manipulation that bridges the gap between robotic and human grasping.